On fast convergence rates for generalized conditional gradient methods with backtracking stepsize

Authors

Abstract

A generalized conditional gradient method for minimizing the sum of two convex functions, one of them differentiable, is presented. This iterative method relies on two main ingredients: first, the minimization of a partially linearized objective functional to compute a descent direction and, second, a stepsize choice based on an Armijo-like condition to ensure a sufficient decrease in every iteration. We provide several convergence results. First, under mild assumptions, the method generates sequences of iterates which converge, along subsequences, towards minimizers. Moreover, a sublinear rate of convergence for the functional values is derived. Second, we show that the method enjoys improved rates of convergence if the problem fulfills certain growth estimates. Most notably, these results do not require strong convexity of the objective functional. Numerical tests on a variety of challenging PDE-constrained optimization problems confirm the practical efficiency of the proposed algorithm.
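For illustration only, the following is a minimal sketch (not the authors' implementation) of such a generalized conditional gradient iteration in the special case where the nonsmooth term is the indicator function of an l1-ball of radius tau: the partially linearized subproblem then reduces to a linear minimization oracle, and the stepsize along the resulting direction is chosen by Armijo-like backtracking. The test objective and all parameter values below are assumptions made for the example.

import numpy as np

def gcg_l1_ball(grad_f, f, x0, tau, max_iter=200, gamma0=1.0, delta=1e-4, shrink=0.5):
    """Generalized conditional gradient sketch: smooth f plus the indicator
    of the l1-ball of radius tau. Illustrative only."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        # Partially linearized subproblem: minimize <g, v> over the l1-ball.
        # Its solution is a signed vertex at the largest-magnitude gradient entry.
        i = np.argmax(np.abs(g))
        v = np.zeros_like(x)
        v[i] = -tau * np.sign(g[i])
        d = v - x                      # descent direction
        slope = g @ d                  # directional derivative <grad f(x), d>
        if slope >= -1e-12:            # (near-)stationary point reached
            break
        # Armijo-like backtracking: shrink the stepsize until sufficient decrease holds.
        gamma = gamma0
        while f(x + gamma * d) > f(x) + delta * gamma * slope:
            gamma *= shrink
        x = x + gamma * d
    return x

# Example usage on a small least-squares problem (assumed data).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x_hat = gcg_l1_ball(grad_f, f, np.zeros(10), tau=2.0)

Because the stepsize never exceeds one, each update is a convex combination of the current iterate and the subproblem solution, so feasibility with respect to the l1-ball is preserved throughout.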


Related Articles

Gradient algorithms for quadratic optimization with fast convergence rates

We propose a family of gradient algorithms for minimizing a quadratic function f(x) = (Ax, x)/2 − (x, y) in R^d or a Hilbert space, with simple rules for choosing the step-size at each iteration. We show that when the step-sizes are generated by a dynamical system with ergodic distribution having the arcsine density on a subinterval of the spectrum of A, the asymptotic rate of convergence of the a...
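For context, and without reproducing the arcsine-distributed step-sizes studied in that paper, the following sketch runs a gradient algorithm on the quadratic f(x) = (Ax, x)/2 − (x, y) using the classical steepest-descent (Cauchy) step, which is available in closed form for quadratics; the matrix A and vector y are assumed test data.

import numpy as np

def cauchy_gradient_quadratic(A, y, x0, max_iter=500, tol=1e-10):
    """Gradient descent on f(x) = (Ax, x)/2 - (x, y) with the exact
    steepest-descent step gamma = <g, g> / <g, A g> (Cauchy step)."""
    x = x0.copy()
    for _ in range(max_iter):
        g = A @ x - y                    # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        gamma = (g @ g) / (g @ (A @ g))  # closed-form line search for quadratics
        x = x - gamma * g
    return x

# Assumed symmetric positive definite test matrix: the minimizer solves A x = y.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
y = rng.standard_normal(8)
x_star = cauchy_gradient_quadratic(A, y, np.zeros(8))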


Generalized conditional gradient: analysis of convergence and applications

The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05]. Indeed, when the objective function is smooth, we provide a novel certificate of optimality and we show that the algorithm has a linear convergence rate. Applications of this algorithm are also discussed...


Convergence of Conjugate Gradient Methods with a Closed-Form Stepsize Formula

Conjugate gradient methods are efficient methods for minimizing differentiable objective functions in large dimension spaces. However, converging line search strategies are usually not easy to choose, nor to implement. Sun and colleagues (Ann. Oper. Res. 103:161–173, 2001; J. Comput. Appl. Math. 146:37–45, 2002) introduced a simple stepsize formula. However, the associated convergence domain ha...
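The closed-form stepsize formula itself is not quoted in the excerpt, so the sketch below instead shows a standard Fletcher-Reeves nonlinear conjugate gradient iteration with an Armijo backtracking line search, purely to make the line-search ingredient discussed above concrete; all names and parameter values are illustrative assumptions.

import numpy as np

def fletcher_reeves_cg(f, grad_f, x0, max_iter=200, tol=1e-8):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking line search.
    (Illustrative; this is not the closed-form stepsize of Sun et al.)"""
    x = x0.copy()
    g = grad_f(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0:                 # safeguard: restart with steepest descent
            d, slope = -g, -(g @ g)
        alpha = 1.0                    # backtracking until sufficient decrease
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example usage on an assumed smooth convex test function.
f = lambda x: np.sum((x - 1.0) ** 4) + 0.5 * np.sum(x ** 2)
grad_f = lambda x: 4.0 * (x - 1.0) ** 3 + x
x_min = fletcher_reeves_cg(f, grad_f, np.zeros(5))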


Efficient Generalized Conditional Gradient with Gradient Sliding for Composite Optimization

The generalized conditional gradient method has regained increasing research interest as an alternative to the popular proximal gradient method for sparse optimization problems. For particular tasks, the low computational cost of its linear subproblem evaluation in each iteration leads to superior practical performance. However, its inferior iteration complexity incurs an excess number of gradient evalu...
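The cost argument above can be made concrete for an l1-ball constrained problem: a conditional gradient step only needs the largest-magnitude entry of the gradient, while a proximal gradient step applies soft-thresholding to every coordinate. A small, purely illustrative comparison of the two oracles (names and data are assumptions):

import numpy as np

def lmo_l1_ball(g, tau):
    """Linear minimization oracle over the l1-ball: argmin <g, v> s.t. ||v||_1 <= tau.
    A single pass to find the largest-magnitude entry of the gradient."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -tau * np.sign(g[i])
    return v

def prox_l1(z, step, alpha):
    """Proximal operator of alpha*||.||_1 (soft-thresholding), the per-iteration
    oracle of proximal gradient methods on the same composite problem."""
    return np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)

g = np.array([0.3, -2.0, 0.5])
print(lmo_l1_ball(g, tau=1.0))          # [0., 1., 0.]  (a vertex of the l1-ball)
print(prox_l1(g, step=0.1, alpha=1.0))  # [0.2, -1.9, 0.4]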


Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
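As a hedged illustration of this setting, the sketch below runs a basic proximal-gradient (ISTA-type) iteration for a smooth-plus-l1 objective in which a synthetic error is added to the gradient of the smooth term, mimicking the inexactness studied in the paper; the data, noise level, and all names are assumptions made for the example.

import numpy as np

def inexact_prox_gradient(grad_f, x0, step, alpha, noise_level, n_iter=300, seed=0):
    """Proximal-gradient iteration for min f(x) + alpha*||x||_1, with a
    synthetic error added to the gradient of the smooth term."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x) + noise_level * rng.standard_normal(x.shape)   # inexact gradient
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)   # exact prox of l1
    return x

# Assumed least-squares data; step = 1/L with L the largest eigenvalue of A^T A.
rng = np.random.default_rng(2)
A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2
x_hat = inexact_prox_gradient(lambda x: A.T @ (A @ x - b),
                              np.zeros(15), step=1.0 / L, alpha=0.1, noise_level=1e-3)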



Journal

Journal title: Numerical Algebra, Control and Optimization

Year: 2022

ISSN: 2155-3297, 2155-3289

DOI: https://doi.org/10.3934/naco.2022026